The California Innocence Project (CIP), a clinical law school program aiming to free wrongfully convicted prisoners, evaluates thousands of mails containing new requests for assistance and corresponding case files. Processing and interpreting this large amount of information presents a significant challenge to CIP officials, one that can be successfully aided by topic-modeling techniques. In this paper, we apply the non-negative matrix factorization (NMF) method and implement several of its important variants on a previously unstudied dataset compiled by CIP. We identify the underlying topics of existing case files and classify request files by crime type and case status (decision type). The results uncover the semantic structure of the current case files and can provide CIP officials with a preliminary overview of newly received case files before further examination. We also present experimental results with popular variants of NMF and discuss the benefits and drawbacks of each variant through this real-world application.
translated by Google Translate
Networks are commonly used to encode the architectures of interactions between entities in complex systems across the physical, biological, social, and information sciences. To study the large-scale behavior of complex systems, it is important to examine mesoscale structures in networks as building blocks that influence such behavior. We propose a new approach for describing low-rank mesoscale structures in networks, and we illustrate our approach using several synthetic network models and empirical friendship, collaboration, and protein-protein interaction (PPI) networks. We find that these networks possess a relatively small number of "latent motifs" that can successfully approximate most subgraphs of a network at a fixed mesoscale. We use an algorithm called "network dictionary learning" (NDL), which combines a network-sampling method with nonnegative matrix factorization, to learn the latent motifs of a given network. The ability to encode a network using a set of latent motifs has a wide variety of applications to network-analysis tasks, such as comparison, denoising, and edge inference. Additionally, using our new network denoising and reconstruction (NDR) algorithm, we demonstrate how to denoise a corrupted network using only the latent motifs that one learns directly from the corrupted network.
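The latent-motif idea can be sketched as follows: sample small subgraph patches from a network and factorize their vectorized adjacency matrices with NMF, so each NMF component is a reusable k-node "motif". The sampling scheme, network, and sizes below are illustrative assumptions; the actual NDL algorithm uses a more careful network-sampling method.

```python
# Rough sketch of dictionary learning on subgraph patches.
# Not the NDL algorithm itself; uniform node sampling is a stand-in
# for NDL's network-sampling method.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n, k, n_samples, n_motifs = 60, 5, 200, 4

# Toy network: symmetric Erdos-Renyi adjacency matrix, no self-loops.
A = (rng.random((n, n)) < 0.15).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Sample k-node patches and flatten each induced adjacency matrix.
patches = []
for _ in range(n_samples):
    nodes = rng.choice(n, size=k, replace=False)
    patches.append(A[np.ix_(nodes, nodes)].ravel())
X = np.array(patches)                        # n_samples x (k*k)

model = NMF(n_components=n_motifs, init="random", random_state=0)
W = model.fit_transform(X)                   # patch-motif weights
motifs = model.components_.reshape(n_motifs, k, k)  # latent motifs
```

Each learned motif is a nonnegative k-by-k pattern; reconstructing patches from `W @ model.components_` is the basic mechanism behind the denoising and reconstruction applications mentioned above.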
Suppose we are given a system of coupled oscillators on an unknown graph, along with the trajectory of the system during some period. Can we predict whether the system will eventually synchronize? Even with a known underlying graph structure, this is an important but analytically intractable question in general. In this work, we take an alternative approach to the synchronization prediction problem by treating it as a classification problem, based on the fact that any given system will eventually either synchronize or converge to a non-synchronizing limit cycle. By using only a few basic statistics of the underlying graph, such as edge density and diameter, our method can achieve perfect accuracy when there is a significant difference between the underlying graphs of the synchronizing and the non-synchronizing examples. However, in problem settings where these graph statistics cannot distinguish the two classes well (e.g., when the graphs are generated from the same random-graph model), we find that pairing a few iterations of the initial dynamics with the graph statistics as inputs to our classification algorithm can lead to significant improvement in accuracy, far beyond what is known by classical oscillator theory. More surprisingly, we find that in almost all such settings, dropping the basic graph statistics and training our algorithm with only the initial dynamics achieves nearly the same accuracy. We demonstrate our method on three models of continuous and discrete coupled oscillators: the Kuramoto model, the Firefly Cellular Automata, and the Greenberg-Hastings model. Finally, we also propose an "ensemble prediction" algorithm that successfully scales our method to large graphs by training on dynamics observed from multiple random subgraphs.
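A minimal sketch of how a few iterations of the initial dynamics can be turned into classifier features, assuming identical-frequency Kuramoto oscillators and using the order parameter as the per-step summary statistic. The graph, coupling, and feature choices here are illustrative assumptions, not the paper's setup.

```python
# Sketch: build a feature vector from graph statistics plus a few
# iterations of Kuramoto dynamics, as input to a classifier.
import numpy as np

def kuramoto_step(theta, A, K=1.0, dt=0.1):
    """One Euler step of identical-frequency Kuramoto dynamics."""
    diffs = theta[None, :] - theta[:, None]   # diffs[i, j] = theta_j - theta_i
    return theta + dt * K * (A * np.sin(diffs)).sum(axis=1)

def dynamics_features(theta0, A, n_steps=5):
    """Order parameter r(t) in [0, 1] over the first few iterations."""
    theta = theta0.copy()
    feats = []
    for _ in range(n_steps):
        theta = kuramoto_step(theta, A)
        feats.append(np.abs(np.exp(1j * theta).mean()))
    return np.array(feats)

rng = np.random.default_rng(0)
n = 10
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # symmetric, no self-loops
theta0 = rng.uniform(0, 2 * np.pi, n)

feats = dynamics_features(theta0, A)
edge_density = A.sum() / (n * (n - 1))
x = np.concatenate([[edge_density], feats])   # classifier input vector
```

Vectors like `x`, gathered from many labeled systems, would then be fed to any off-the-shelf classifier; dropping the leading graph statistic reproduces the "dynamics-only" variant described above.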
Understanding the relationship between structure and sentiment is essential to anticipating future operations of online social networks, specifically within popular conversations on Twitter. This paper examines the relationship between two variables: structure, defined as the composition of a directed network, and sentiment, a quantified value of the positive/negative connotations of a conversation. We show thread sentiment to be inversely proportional to the strength and connectivity of a network. The second portion of this paper highlights differences across query types, specifically how the aforementioned behavior differs within four key query types. This paper focuses on topical, event-based, geographic, and individual queries as orientations with differing behavior. Using cross-query analysis, we see that the relationship between structure and sentiment, though still inversely proportional, differs greatly across query types. We find this relationship to be clearest within the individual queries and least prevalent within the event-based queries. This paper provides a sociological progression in our understanding of opinion and networks, while offering a methodological advancement for future studies on similar subjects.
We present temporally layered architecture (TLA), a biologically inspired system for temporally adaptive distributed control. TLA layers a fast and a slow controller together to achieve temporal abstraction that allows each layer to focus on a different time-scale. Our design is biologically inspired and draws on the architecture of the human brain, which executes actions at different timescales depending on the environment's demands. Such distributed control design is widespread across biological systems because it increases survivability and accuracy in certain and uncertain environments. We demonstrate that TLA can provide many advantages over existing approaches, including persistent exploration, adaptive control, explainable temporal behavior, compute efficiency and distributed control. We present two different algorithms for training TLA: (a) closed-loop control, where the fast controller is trained over a pre-trained slow controller, allowing better exploration for the fast controller, which decides whether to "act-or-not" at each timestep; and (b) partially open-loop control, where the slow controller is trained over a pre-trained fast controller, allowing the slow controller to pick a temporally extended action or defer the next n actions to the fast controller. We evaluate our method on a suite of continuous control tasks and demonstrate the advantages of TLA over several strong baselines.
Data deprivation, or the lack of easily available and actionable information on the well-being of individuals, is a significant challenge for the developing world and an impediment to the design and operationalization of policies intended to alleviate poverty. In this paper we explore the suitability of data derived from OpenStreetMap to proxy for the location of two crucial public services: schools and health clinics. Thanks to the efforts of thousands of digital humanitarians, online mapping repositories such as OpenStreetMap contain millions of records on buildings and other structures, delineating both their location and often their use. Unfortunately, much of this data is locked in complex, unstructured text, rendering it seemingly unsuitable for classifying schools or clinics. We apply a scalable, unsupervised learning method to unlabeled OpenStreetMap building data to extract the location of schools and health clinics in ten countries in Africa. We find the topic modeling approach greatly improves performance versus reliance on structured keys alone. We validate our results by comparing schools and clinics identified by our OSM method versus those identified by the WHO, and describe OSM coverage gaps more broadly.
We present a new algorithm for automatically bounding the Taylor remainder series. In the special case of a scalar function $f: \mathbb{R} \mapsto \mathbb{R}$, our algorithm takes as input a reference point $x_0$, trust region $[a, b]$, and integer $k \ge 0$, and returns an interval $I$ such that $f(x) - \sum_{i=0}^k \frac {f^{(i)}(x_0)} {i!} (x - x_0)^i \in I (x - x_0)^{k+1}$ for all $x \in [a, b]$. As in automatic differentiation, the function $f$ is provided to the algorithm in symbolic form, and must be composed of known elementary functions. At a high level, our algorithm has two steps. First, for a variety of commonly-used elementary functions (e.g., $\exp$, $\log$), we derive sharp polynomial upper and lower bounds on the Taylor remainder series. We then recursively combine the bounds for the elementary functions using an interval arithmetic variant of Taylor-mode automatic differentiation. Our algorithm can make efficient use of machine learning hardware accelerators, and we provide an open source implementation in JAX. We then turn our attention to applications. Most notably, we use our new machinery to create the first universal majorization-minimization optimization algorithms: algorithms that iteratively minimize an arbitrary loss using a majorizer that is derived automatically, rather than by hand. Applied to machine learning, this leads to architecture-specific optimizers for training deep networks that converge from any starting point, without hyperparameter tuning. Our experiments show that for some optimization problems, these hyperparameter-free optimizers outperform tuned versions of gradient descent, Adam, and AdaGrad. We also show that our automatically-derived bounds can be used for verified global optimization and numerical integration, and to prove sharper versions of Jensen's inequality.
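As a toy illustration of the kind of interval the algorithm returns, the sketch below bounds the Taylor remainder of $\exp$ at $x_0 = 0$ over a trust region containing 0, using the Lagrange form of the remainder. This is looser than the sharp elementary-function bounds derived in the paper and is not the paper's algorithm; it only shows what an interval $I$ with $f(x) - \sum_{i=0}^k \frac{f^{(i)}(x_0)}{i!}(x - x_0)^i \in I\,(x - x_0)^{k+1}$ looks like concretely.

```python
# Crude interval bound on the Taylor remainder of exp at x0 = 0:
#   R_k(x) = exp(xi) / (k+1)! * x^(k+1)  for some xi between 0 and x,
# so with 0 in [a, b], I = [exp(a), exp(b)] / (k+1)! is a valid
# (but not sharp) enclosure of R_k(x) / x^(k+1).
import math

def exp_remainder_interval(a, b, k):
    """Interval I with exp(x) - T_k(x) in I * x**(k+1) for x in [a, b].

    Assumes 0 is inside the trust region [a, b].
    """
    c = math.factorial(k + 1)
    return (math.exp(a) / c, math.exp(b) / c)

def check(a, b, k, n=101):
    """Verify the enclosure numerically on a grid; return (lo, hi)."""
    lo, hi = exp_remainder_interval(a, b, k)
    for i in range(n):
        x = a + (b - a) * i / (n - 1)
        if x == 0:
            continue  # remainder ratio is defined by a limit at x = 0
        taylor = sum(x**j / math.factorial(j) for j in range(k + 1))
        ratio = (math.exp(x) - taylor) / x ** (k + 1)
        assert lo <= ratio <= hi
    return lo, hi
```

For example, `check(-1.0, 1.0, 3)` verifies the enclosure for the degree-3 expansion on $[-1, 1]$; as $x \to 0$ the ratio tends to $1/(k+1)!$, which lies inside the returned interval.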
A typical product or place often has hundreds of reviews, and summarization of these texts is an important and challenging problem. Recent progress on abstractive summarization in domains such as news has been driven by supervised systems trained on hundreds of thousands of news articles paired with human-written summaries. However, for opinion texts, such large-scale datasets are rarely available. Unsupervised methods, self-training, and few-shot learning approaches bridge that gap. In this work, we present a novel self-training approach, OpineSum, for abstractive opinion summarization. The summaries in this approach are built using a novel application of textual entailment and capture the consensus of opinions across the various reviews for an item. This method can be used to obtain silver-standard summaries on a large scale and train both unsupervised and few-shot abstractive summarization systems. OpineSum achieves state-of-the-art performance in both settings.
The applicability of computational models to the biological world is an active topic of debate. We argue that a useful path forward results from abandoning hard boundaries between categories and adopting an observer-dependent, pragmatic view. Such a view dissolves the contingent dichotomies driven by human cognitive biases (e.g., tendency to oversimplify) and prior technological limitations in favor of a more continuous, gradualist view necessitated by the study of evolution, developmental biology, and intelligent machines. Efforts to re-shape living systems for biomedical or bioengineering purposes require prediction and control of their function at multiple scales. This is challenging for many reasons, one of which is that living systems perform multiple functions in the same place at the same time. We refer to this as "polycomputing" - the ability of the same substrate to simultaneously compute different things. This ability is an important way in which living things are a kind of computer, but not the familiar, linear, deterministic kind; rather, living things are computers in the broad sense of computational materials as reported in the rapidly-growing physical computing literature. We argue that an observer-centered framework for the computations performed by evolved and designed systems will improve the understanding of meso-scale events, as it has already done at quantum and relativistic scales. Here, we review examples of biological and technological polycomputing, and develop the idea that overloading of different functions on the same hardware is an important design principle that helps understand and build both evolved and designed systems. Learning to hack existing polycomputing substrates, as well as evolve and design new ones, will have massive impacts on regenerative medicine, robotics, and computer engineering.
Abstractive summarization has enjoyed renewed interest in recent years, thanks to pre-trained language models and the availability of large-scale datasets. Despite promising results, current models still suffer from generating factually inconsistent summaries, reducing their utility for real-world application. Several recent efforts attempt to address this by devising models that automatically detect factual inconsistencies in machine generated summaries. However, they focus exclusively on English, a language with abundant resources. In this work, we leverage factual consistency evaluation models to improve multilingual summarization. We explore two intuitive approaches to mitigate hallucinations based on the signal provided by a multilingual NLI model, namely data filtering and controlled generation. Experimental results in the 45 languages from the XLSum dataset show gains over strong baselines in both automatic and human evaluation.